Version: 12.10.0

Kubernetes

Overview

Kubernetes is a portable, extensible, open source platform for managing containerized workloads and services. It facilitates both declarative configuration and automation, and it has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.

The name Kubernetes originates from Greek, meaning helmsman or pilot. K8s as an abbreviation results from counting the eight letters between the "K" and the "s".


Traditional Deployments: Initially, organizations hosted their applications on standalone physical servers. Without the ability to set resource limits on these servers, applications often competed for resources, leading to potential performance issues. For instance, one application could monopolize the server's resources, causing others to perform poorly. The straightforward fix was to allocate a separate server for each application, but this approach was inefficient and costly due to underused resources and the expense of managing numerous servers.

Virtualized Deployments: Virtualization emerged as a remedy, enabling the operation of multiple Virtual Machines (VMs) on a single server's CPU. This technology provided application isolation within VMs, enhancing security by preventing applications from accessing each other's data.

Virtualization improved the efficiency of physical server resource use and scalability, as new applications could be easily added or updated. It also reduced hardware expenses by representing physical resources as a cluster of expendable virtual machines.

Each VM operates as a complete unit with its own OS and virtualized hardware components.

Container Deployments: Containers are akin to VMs but offer a lighter form of isolation by sharing the OS between applications, making them more lightweight. Containers maintain their own filesystem, CPU allocation, memory, and process space and are independent of the underlying infrastructure, allowing for portability across different cloud environments and OS distributions.

The popularity of containers stems from several advantages they offer:

Agile Application Creation and Deployment: Container images are simpler and more efficient to create than VM images.

Continuous Development, Integration, and Deployment: Facilitates consistent and frequent building and deploying of container images, with swift rollbacks enabled by image immutability.

Dev and Ops Roles Distinction: Container images are produced at build/release time, separating applications from the infrastructure.

Observability: Provides insights into OS-level metrics and application health.

Consistent Environments Across Development, Testing, and Production: Ensures uniformity, running identically on different environments from laptops to cloud platforms.

Portability Across Clouds and OS Distributions: Compatible with various operating systems and cloud infrastructures.

Application-Centric Management: Shifts focus from running an OS on virtual hardware to running applications on an OS with logical resources.

Microservices Architecture: Supports a distributed, loosely coupled approach, allowing for dynamic deployment and management of smaller, independent application segments.

Resource Isolation: Guarantees stable application performance.

Resource Utilization: Achieves high efficiency and density.

Kubernetes provides you with a framework to run distributed systems resiliently. It takes care of scaling and failover for your application, provides deployment patterns, and more. For example: Kubernetes can easily manage a canary deployment for your system.

Kubernetes provides you with:

**Service discovery and load balancing**: Kubernetes can expose a container using a DNS name or its own IP address. If traffic to a container is high, Kubernetes can load balance and distribute the network traffic so that the deployment is stable.

**Storage orchestration**: Kubernetes allows you to automatically mount a storage system of your choice, such as local storage, public cloud providers, and more.

**Automated rollouts and rollbacks**: You can describe the desired state for your deployed containers using Kubernetes, and it can change the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers, and adopt all their resources to the new container.

**Automatic bin packing**: You provide Kubernetes with a cluster of nodes that it can use to run containerized tasks. You tell Kubernetes how much CPU and memory (RAM) each container needs. Kubernetes can fit containers onto your nodes to make the best use of your resources.
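As a sketch of how those resource requirements are declared, a pod spec can state per-container requests (used by the scheduler for bin packing) and limits. The name, image, and values below are illustrative, not from this document:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-app          # hypothetical pod name
spec:
  containers:
  - name: web
    image: nginx:1.25     # hypothetical image
    resources:
      requests:           # what the scheduler reserves when placing the pod
        cpu: "250m"       # 0.25 CPU cores
        memory: "128Mi"
      limits:             # hard ceiling enforced at runtime
        cpu: "500m"
        memory: "256Mi"
```

The scheduler places the pod only on a node with at least the requested CPU and memory still unreserved.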

**Self-healing**: Kubernetes restarts containers that fail, replaces containers, kills containers that don't respond to your user-defined health check, and doesn't advertise them to clients until they are ready to serve.
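Those user-defined health checks are expressed as probes on the container spec. A minimal illustrative fragment (the endpoint paths and port are hypothetical) might look like:

```yaml
# Fragment of a container spec, assuming the app serves HTTP on port 8080
# with /healthz and /ready endpoints (hypothetical).
livenessProbe:            # failure => Kubernetes restarts the container
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
readinessProbe:           # failure => pod is removed from Service endpoints
  httpGet:
    path: /ready
    port: 8080
  periodSeconds: 5
```

The liveness probe drives restarts; the readiness probe controls whether the pod receives traffic.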

**Secret and configuration management**: Kubernetes lets you store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images, and without exposing secrets in your stack configuration.
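As a sketch, a Secret can be defined as a manifest and then referenced from a container's environment; the names and value here are placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials    # hypothetical secret name
type: Opaque
stringData:               # plain-text input; stored base64-encoded
  DB_PASSWORD: change-me  # placeholder value
---
# Fragment of a container spec consuming the secret as an env var:
# env:
# - name: DB_PASSWORD
#   valueFrom:
#     secretKeyRef:
#       name: db-credentials
#       key: DB_PASSWORD
```

Updating the Secret does not require rebuilding the container image, only redeploying the pods that consume it.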

**Batch execution**: In addition to services, Kubernetes can manage your batch and CI workloads, replacing containers that fail, if desired.

**Horizontal scaling**: Scale your application up and down with a simple command, with a UI, or automatically based on CPU usage.
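The "automatically based on CPU usage" case is handled by a HorizontalPodAutoscaler. An illustrative manifest (the deployment name and thresholds are hypothetical) might look like:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa           # hypothetical name
spec:
  scaleTargetRef:         # which workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: web             # hypothetical deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add pods when average CPU exceeds 70%
```

The same effect can be achieved imperatively with `kubectl autoscale deployment web --cpu-percent=70 --min=2 --max=10`.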

**IPv4/IPv6 dual-stack**: Allocation of IPv4 and IPv6 addresses to Pods and Services.
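On a dual-stack-enabled cluster, a Service can request both address families via the `ipFamilyPolicy` field; the service name and selector below are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web               # hypothetical service name
spec:
  ipFamilyPolicy: PreferDualStack   # use both families if the cluster supports it
  ipFamilies:
  - IPv4
  - IPv6
  selector:
    app: web              # hypothetical pod label
  ports:
  - port: 80
```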

**Designed for extensibility**: Add features to your Kubernetes cluster without changing upstream source code.

Kubernetes Components

When you deploy Kubernetes, you get a cluster.

A Kubernetes cluster consists of a set of worker machines, called nodes, that run containerized applications. Every cluster has at least one worker node.

The worker node(s) host the Pods that are the components of the application workload. The control plane manages the worker nodes and the Pods in the cluster. In production environments, the control plane usually runs across multiple computers and a cluster usually runs multiple nodes, providing fault-tolerance and high availability.


Azure Kubernetes Service (AKS)

AKS allows you to quickly deploy a production-ready Kubernetes cluster in Azure.

**Azure Kubernetes Service (AKS)** is a managed Kubernetes service that you can use to deploy and manage containerized applications, with minimal container orchestration expertise required. AKS reduces the complexity and operational overhead of managing Kubernetes by offloading much of that responsibility to Azure. It is an ideal platform for containerized applications that require high availability, scalability, and portability, and for deploying to multiple regions, using open-source tools, and integrating with existing DevOps tooling.

When you create an AKS cluster, Azure automatically creates and configures a control plane for you at no cost. The Azure platform manages the AKS control plane, which is responsible for the Kubernetes objects and worker nodes that you deploy to run your applications. Azure handles critical operations such as health monitoring and maintenance, and you pay only for the AKS nodes that run your applications.


Cluster components

An AKS cluster is divided into two main components:

**Control plane**: The control plane provides the core Kubernetes services and orchestration of application workloads.

**Nodes**: Nodes are the underlying virtual machines (VMs) that run your applications.


Useful Kubernetes commands

Kubernetes is a powerful container orchestration tool that comes with a command-line interface (CLI) called kubectl. Here are some useful kubectl commands along with their explanations:

`kubectl get pods`

Retrieves a list of pods running in the current namespace. A pod is the smallest deployable unit in Kubernetes that can host one or more containers.

`kubectl create -f <file.yaml>`

Creates a resource specified in a YAML or JSON file. This can be used to create services, deployments, and other Kubernetes objects.
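For illustration, a minimal Deployment manifest that `kubectl create -f` (or `kubectl apply -f`) could consume might look like the following; the name, labels, and image are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web               # hypothetical deployment name
spec:
  replicas: 3             # desired number of pod instances
  selector:
    matchLabels:
      app: web            # must match the pod template labels below
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25 # hypothetical image
        ports:
        - containerPort: 80
```

Saving this as `deployment.yaml` and running `kubectl apply -f deployment.yaml` creates the deployment, or updates it in place if it already exists.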

`kubectl apply -f <file.yaml>`

Applies changes to a resource from a file. This is similar to `create` but is idempotent and can be used to update resources in place.

`kubectl delete -f <file.yaml>`

Deletes resources defined in a YAML or JSON file.

`kubectl describe <resource> <name>`

Shows detailed information about a specific resource, such as a pod, service, or deployment. For example, `kubectl describe pod my-pod`.

`kubectl logs <pod-name>`

Fetches the logs for a specific pod. This is useful for debugging issues with applications running inside pods.

`kubectl exec -it <pod-name> -- <command>`

Executes a command inside a container within a pod. The `-it` flags allow you to interact with the command shell.

`kubectl port-forward <pod-name> <local-port>:<pod-port>`

Forwards one or more local ports to a pod. This is useful for accessing a pod from your local machine for testing or debugging purposes.

`kubectl scale deployment <deployment-name> --replicas=<num>`

Scales a deployment to the specified number of replicas. This can increase or decrease the number of pod instances.

`kubectl rollout status deployment/<deployment-name>`

Monitors the status of the latest rollout until it's complete.

`kubectl rollout undo deployment/<deployment-name>`

Rolls back to the previous deployment in case of issues with the current version.

`kubectl top pod`

Displays CPU and memory usage for pods in the current namespace.

`kubectl config current-context`

Displays the current Kubernetes context. The context includes the cluster, namespace, and user information.

`kubectl config use-context <context-name>`

Switches to a different context. Useful when managing multiple clusters.

`kubectl set image deployment/<deployment-name> <container-name>=<new-image>`

Updates the image of a container within a deployment to a new version.

These commands are just the tip of the iceberg when it comes to managing Kubernetes clusters. `kubectl` offers a comprehensive set of commands to manage nearly every aspect of your cluster and workloads. Refer to the official Kubernetes documentation or use `kubectl --help` for more detailed information on each command.